
Conference Proceedings


KCC 2021


Title: When Learning-Based Caching at the Edge Meets Data Market
Authors: Kyi Thar, Ki Tae Kim, Ye Lin Tun, Chu Myaet Thwal, Choong Seon Hong
Citation: Vol. 48, No. 1, pp. 1562–1564 (June 2021)
Abstract:
Caching popular contents at edge nodes such as base stations is a promising way to improve users' quality of service and to reduce network traffic. However, it is challenging to correctly predict the future popularity of contents and to decide which contents to store in the cache. Recently, with big data and high computing power, deep learning models have achieved high accuracy on prediction problems. Hence, we use deep learning, where the training phase is executed at cloud datacenters that can handle huge amounts of raw data. The trained model is then transferred to the base station to predict content popularity and make caching decisions. This creates a trade-off between the training cost at the cloud datacenter and the caching gain at the base station. In this paper, we investigate this trade-off and propose a learning-based caching scheme. We implement and train the prediction model using TensorFlow, and test the performance of the caching scheme with a Python-based simulator.
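The scheme described in the abstract splits work between a cloud datacenter, which trains a popularity predictor on large request logs, and a base station, which runs the trained model to decide what to cache. Below is a minimal Python/TensorFlow sketch of that split; the abstract does not specify the model architecture, features, or catalog sizes, so the feed-forward predictor, the synthetic Poisson traffic, and all constants (NUM_CONTENTS, WINDOW, CACHE_SIZE) are illustrative assumptions, not the authors' implementation.

import numpy as np
import tensorflow as tf

# Hypothetical sizes; the paper does not report its actual settings.
NUM_CONTENTS = 100   # contents in the catalog
WINDOW = 8           # past request windows used as input features
CACHE_SIZE = 10      # cache slots available at the base station

# Cloud side: train a small model that predicts a content's
# next-window request count from its previous WINDOW counts.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),  # predicted popularity (request count)
])
model.compile(optimizer="adam", loss="mse")

# Synthetic Poisson traffic as a stand-in for the raw request logs
# aggregated at the cloud datacenter.
rng = np.random.default_rng(0)
X = rng.poisson(lam=5.0, size=(5000, WINDOW)).astype("float32")
y = X.mean(axis=1, keepdims=True)  # toy popularity target
model.fit(X, y, epochs=3, batch_size=64, verbose=0)  # training cost paid at the cloud

# Edge side: the trained model is transferred to the base station,
# which scores every content and caches the top-CACHE_SIZE of them.
recent = rng.poisson(lam=5.0, size=(NUM_CONTENTS, WINDOW)).astype("float32")
scores = model.predict(recent, verbose=0).ravel()
cache = sorted(np.argsort(scores)[-CACHE_SIZE:].tolist())  # content IDs to keep cached
print("cached content IDs:", cache)

In this toy form, the trade-off the abstract names is visible directly: retraining at the cloud more often (more fit calls on fresh logs) raises training cost, but keeps the edge's popularity scores, and hence the caching gain, up to date.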